Reducing large adaptation spaces in self-adaptive systems using machine learning

Authors

Abstract

Modern software systems often have to cope with uncertain operating conditions, such as changing workloads or fluctuating interference in a wireless network. To ensure that these systems meet their goals, these uncertainties need to be mitigated. One approach to realize this is self-adaptation, which equips a system with a feedback loop. The feedback loop implements four core functions – monitor, analyze, plan, and execute – that share knowledge in the form of runtime models. For systems with a large number of adaptation options, i.e., large adaptation spaces, deciding which option to select may be time consuming or even infeasible within the available time window to make an adaptation decision. This is particularly the case when rigorous analysis techniques are used, such as formal verification at runtime, which is now widely adopted. One technique to deal with large adaptation spaces is reducing the space using machine learning. The state of the art has shown the effectiveness of this technique; yet, a systematic solution that is able to handle different types of goals is lacking. In this paper, we present ML2ASR+, short for Machine Learning to Adaptation Space Reduction Plus. Central to ML2ASR+ is a configurable machine learning pipeline that supports the effective analysis of large adaptation spaces with threshold, optimization, and setpoint goals. We evaluate ML2ASR+ for two applications with different sizes of adaptation spaces: an Internet-of-Things application and a service-based system. The results demonstrate that ML2ASR+ can be applied to reduce the adaptation space and, hence, the time to make adaptation decisions by over 90%, with a negligible effect on the realization of the adaptation goals.
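The core idea of learning-based adaptation space reduction can be illustrated with a minimal, self-contained sketch. This is not the actual ML2ASR+ pipeline: the IoT-style setting, the packet-loss model, the threshold goal, and the use of a decision stump as the learned model are all illustrative assumptions. A classifier is trained offline on a verified sample of adaptation options and then used to filter the space, so that rigorous analysis only runs on the predicted-relevant subset.

```python
import random

random.seed(42)

GOAL = 0.10  # hypothetical threshold goal: packet loss below 10%

# Hypothetical environment model: packet loss for an adaptation option
# (here a transmission power setting) decreases with power, plus noise.
def packet_loss(power):
    return max(0.0, 0.35 - 0.03 * power + random.uniform(-0.02, 0.02))

# Offline phase: verify a sample of options exhaustively and learn a
# simple classifier (a decision stump standing in for the ML model):
# the lowest power setting that satisfied the goal during training.
train = [(p, packet_loss(p) < GOAL) for p in range(0, 15)]
threshold = min(p for p, ok in train if ok)

# Online phase: classify all options and hand only the predicted-relevant
# subset to the rigorous (and expensive) analysis, instead of the full space.
space = list(range(0, 15))
relevant = [p for p in space if p >= threshold]
reduction = 1 - len(relevant) / len(space)
print(f"analyze {len(relevant)} of {len(space)} options "
      f"({reduction:.0%} of the space pruned)")
```

The stump would be replaced by a real classifier or regressor in practice; the point is that the expensive verification step runs on a fraction of the adaptation space, which is where the reported decision-time savings come from.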



Similar resources

Self-Adaptation in Learning Classifier Systems

The use and potential benefits of self-adaptive mutation operators are well-known within evolutionary computing. In this paper we begin by examining the use of self-adaptive mutation in Learning Classifier Systems. We implement the operator in the simple ZCS classifier and examine its performance in two maze environments. It is shown that, although no significant increase in performance is seen...
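Self-adaptive mutation is commonly realized with the log-normal update rule from evolution strategies: each individual carries its own mutation step size, which is itself mutated before being applied. The sketch below assumes that rule on a real-valued genome for simplicity (ZCS attaches mutation rates to individual classifiers rather than to a real-valued genome, and the learning rate TAU is an illustrative choice).

```python
import math
import random

random.seed(1)

N = 10                    # genome length (illustrative)
TAU = 1 / math.sqrt(N)    # common setting for the step-size learning rate

def self_adaptive_mutate(genome, sigma):
    # First mutate the strategy parameter itself (log-normal rule),
    # so good step sizes can be inherited along with good solutions...
    new_sigma = sigma * math.exp(TAU * random.gauss(0, 1))
    # ...then use the new step size to perturb the object variables.
    new_genome = [g + random.gauss(0, new_sigma) for g in genome]
    return new_genome, new_sigma

parent_genome, parent_sigma = [0.0] * N, 0.5
child_genome, child_sigma = self_adaptive_mutate(parent_genome, parent_sigma)
print(f"sigma: {parent_sigma} -> {child_sigma:.4f}")
```

Because the step size is multiplied by a log-normal factor, it stays positive and drifts over generations; selection then implicitly favors individuals whose mutation rates suit the current environment.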


Declarative Systems for Large-Scale Machine Learning

In this article, we make the case for a declarative foundation for data-intensive machine learning systems. Instead of creating a new system for each specific flavor of machine learning task, or hardcoding new optimizations, we argue for the use of recursive queries to program a variety of machine learning algorithms. By taking this approach, database query optimization techniques can be utiliz...


Using Game Theory Techniques in Self-Organizing Maps Training

The self-organizing map is the most widely used neural network for clustering and vector quantization. Since its introduction, this method has been applied to a variety of problems across different domains, and numerous extensions and improvements have been proposed for it. The self-organizing map uses a number of cells to estimate the distribution function of the input patterns in a multi-dimensional space. The possibility of dead cells is a fundamental problem in the self-organizing map algorithm...

Decentralized Adaptive Control of Large-Scale Non-affine Nonlinear Time-Delay Systems Using Neural Networks

In this paper, a decentralized adaptive neural controller is proposed for a class of large-scale nonlinear systems with unknown nonlinear, non-affine subsystems and unknown nonlinear time-delay interconnections. The stability of the closed loop system is guaranteed through Lyapunov-Krasovskii stability analysis. Simulation results are provided to show the effectiveness of the proposed approache...


Adaptive Supersampling Using Machine Learning Techniques

Previous work on adaptive supersampling has utilized algorithmic approaches to analyze properties of the object space or image space that might benefit from increased supersampling. These techniques generally increase the computational complexity of simple supersampling, reducing the potential performance gain from employing adaptive supersampling. In this paper, we describe an experi...



Journal

Journal title: Journal of Systems and Software

Year: 2022

ISSN: 0164-1212, 1873-1228

DOI: https://doi.org/10.1016/j.jss.2022.111341